AAAI AI-Alert Ethics for Feb 1, 2020
Artificial Intelligence Needs Private Markets for Regulation--Here's Why
A regulatory market approach would enable the dynamism needed for AI to flourish in a way consistent with safety and public trust. It seems the White House wants to ramp up America's artificial intelligence (AI) dominance. Earlier this month, the U.S. Office of Management and Budget released its "Guidance for Regulation of Artificial Intelligence Applications" for federal agencies to oversee AI's development in a way that protects innovation without making the public wary. The noble aims of these principles respond to the need for a coherent American vision for AI development--complete with transparency, public participation and interagency coordination. But the government is missing something key.
- Government > Regional Government > North America Government > United States Government (0.72)
- Transportation > Air (0.52)
Philips CTO outlines ethical guidelines for AI in healthcare
The use of artificial intelligence and machine learning algorithms in healthcare is poised to expand significantly over the next few years, but beyond the investment strategies and technological foundations lie serious questions around the ethical and responsible use of AI. In an effort to clarify its own position and add to the debate, the executive vice president and chief technology officer of Royal Philips, Henk van Houten, has published a list of five guiding principles for the design and responsible use of AI in healthcare and personal health applications. The five principles – well-being, oversight, robustness, fairness, and transparency – all stem from the basic viewpoint that AI-enabled solutions should complement and benefit customers, patients, and society as a whole. First and foremost, van Houten argues, well-being should be front of mind when developing healthcare AI solutions: they should help to alleviate overstretched healthcare systems but, more importantly, supply proactive care, informing and supporting healthy living over the course of a person's entire life. When it comes to oversight, van Houten called for proper validation and interpretation of AI-generated insights through the participation and collaboration of AI engineers, data scientists, and clinical experts.
- North America > United States > Oregon (0.06)
- North America > United States > New Jersey (0.06)
- Health & Medicine > Consumer Health (0.37)
- Banking & Finance > Insurance (0.33)
The Killer Algorithms Nobody's Talking About
This past fall, diplomats from around the globe gathered in Geneva to do something about killer robots. In a result that surprised nobody, they failed. The formal debate over lethal autonomous weapons systems--machines that can select and fire at targets on their own--began in earnest about half a decade ago under the Convention on Certain Conventional Weapons, the international community's principal mechanism for banning systems and devices deemed too hellish for use in war. But despite yearly meetings, the CCW has yet to agree what "lethal autonomous weapons" even are, let alone set a blueprint for how to rein them in. Meanwhile, the technology is advancing ferociously; militaries aren't going to wait for delegates to pin down the exact meaning of slippery terms such as "meaningful human control" before sending advanced warbots to battle.
- North America > United States (0.97)
- Asia > China (0.06)
- Europe > Russia (0.05)
- Asia > Russia (0.05)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.70)
Google's Sundar Pichai doesn't want you to be clear-eyed about AI's dangers – TechCrunch
Alphabet and Google CEO Sundar Pichai is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies. In an op-ed published in today's Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population scale -- with the Google chief claiming: "AI has the potential to improve billions of lives, and the biggest risk may be failing to do so" -- thereby seeking to frame 'no hard limits' as actually the safest option for humanity. Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock -- presenting "potential negative consequences" as simply the inevitable and necessary price of technological progress. It's all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.
- North America > United States (0.30)
- Europe (0.30)
- Information Technology (1.00)
- Government > Regional Government (0.96)
Nations dawdle on agreeing rules to control 'killer robots' in future wars - Reuters
NAIROBI (Thomson Reuters Foundation) - Countries are rapidly developing "killer robots" - machines with artificial intelligence (AI) that independently kill - but are moving at a snail's pace on agreeing global rules over their use in future wars, warn technology and human rights experts. From drones and missiles to tanks and submarines, semi-autonomous weapons systems have been used for decades to eliminate targets in modern day warfare - but they all have human supervision. Nations such as the United States, Russia and Israel are now investing in developing lethal autonomous weapons systems (LAWS) which can identify, target, and kill a person all on their own - but to date there are no international laws governing their use. "Some kind of human control is necessary ... Only humans can make context-specific judgements of distinction, proportionality and precautions in combat," said Peter Maurer, President of the International Committee of the Red Cross (ICRC).
- North America > United States (0.75)
- Asia > Russia (0.38)
- Europe > Russia (0.27)
- (7 more...)
- Government > Military (1.00)
- Law > Civil Rights & Constitutional Law (0.94)
- Government > Regional Government > North America Government > United States Government (0.33)
How far should we let AI go? - MaRS Discovery District
The transformative power of artificial intelligence has come to preoccupy big business and government as well as academics. But as AI's potential sinks in, a growing number of policy experts -- along with some leading figures in technology -- are asking tough questions: Should these cutting-edge algorithms be regulated, taxed or even, in certain cases, blocked? Consider what AI can do in the workplace. For example, managers realize that office politics, stress and other pressures take a toll on employees. They also know that standard-issue job-satisfaction surveys "don't provide a true gauge of what's going on" around the water cooler or in the staff lunchroom, says Jonathan Kreindler, Chief Executive Officer of Receptiviti.ai.
- North America > Canada > Ontario > Toronto (0.06)
- North America > United States > Texas (0.05)
- North America > United States > New York > New York County > New York City (0.05)
- (3 more...)
- Law (1.00)
- Health & Medicine (0.72)
- Government (0.70)
- Information Technology > Security & Privacy (0.48)
White House Proposes Hands-Off Approach to AI Regulation
The White House's Office of Science and Technology Policy (OSTP) has issued a draft memo to government agencies which spells out the principles agencies must abide by when creating regulations for the use of AI. The principles are designed to achieve three goals: ensure public engagement, limit regulatory overreach and promote trustworthy technology. The memo includes 10 principles that agencies must consider when drafting AI regulations. The memo follows on from President Trump's executive order on AI in February 2019, which set out the administration's strategy for accelerating the US's position of leadership in AI. This includes fostering public trust in AI systems by establishing appropriate governance of, and standards for, the technology.
- North America > United States (1.00)
- Europe > Germany (0.05)
- Asia > China (0.05)
Obama-era tech advisors list potential challenges for the White House's AI principles
Former Obama administration advisors say the White House regulatory AI principles announced this week are a good start in many ways, but they're incorrect in their oversimplified mandate to avoid overregulation of private business use, and the Trump administration could face an uphill battle in its appeal to the rest of the world. Though the Trump administration has developed a reputation for blaming the Obama administration when things go wrong or trying to erase Obama-era policy, on artificial intelligence policy the Trump administration has at times remained strikingly similar to its predecessor. This was evident in the AI research and development strategy plan for federal agencies released in summer 2019. In some instances, the same people drive White House AI policy, such as Dr. Lynne Parker, White House deputy CTO and assistant director of AI at the White House Office of Science and Technology Policy (OSTP), who also served in the Obama administration. The list of 10 AI principles is meant to guide US federal agencies as they consider making rules that regulate AI. White House CTO Michael Kratsios said he wants other countries around the world to adopt similar policies.
- North America > United States (1.00)
- Europe > Russia (0.05)
- Asia > Russia (0.05)
- Asia > China > Beijing > Beijing (0.05)
White House proposes guidelines for regulating the use of artificial intelligence - The Star
The Trump administration is proposing new rules to guide future federal regulation of artificial intelligence used in medicine, transportation and other industries. But the vagueness of the principles announced by the White House is unlikely to satisfy AI watchdogs who have warned of a lack of accountability as computer systems are deployed to take on human roles in high-risk social settings, such as mortgage lending or job recruitment. A document from the White House said that in deciding regulatory action, U.S. agencies "must consider fairness, non-discrimination, openness, transparency, safety, and security." The rules won't affect how federal agencies such as law enforcement use AI; they are specifically limited to how federal agencies devise new AI regulations for the private sector. There's a month-long public comment period before the rules take effect.
- Law > Statutes (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
White House proposes guidelines for regulating the use of AI
The Trump administration is proposing new rules to guide future federal regulation of artificial intelligence used in medicine, transportation and other industries. But the vagueness of the principles announced by the White House is unlikely to satisfy AI watchdogs who have warned of a lack of accountability as computer systems are deployed to take on human roles in high-risk social settings, such as mortgage lending or job recruitment. The White House said that in deciding regulatory action, U.S. agencies "must consider fairness, non-discrimination, openness, transparency, safety, and security." But federal agencies must also avoid setting up restrictions that "needlessly hamper AI innovation and growth," reads a memo being sent to U.S. agency chiefs from Russell Vought, acting director of the Office of Management and Budget. "Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits," the memo says.
- North America > United States > New York (0.05)
- Asia > China > Xinjiang Uygur Autonomous Region (0.05)